On the Complexity of the k-Level in Arrangements of Pseudoplanes
A classical open problem in combinatorial geometry is to obtain tight
asymptotic bounds on the maximum number of k-level vertices in an arrangement
of n hyperplanes in d dimensions (vertices with exactly k of the hyperplanes
passing below them). This is a dual version of the k-set problem, which, in a
primal setting, seeks bounds for the maximum number of k-sets determined by n
points in d dimensions, where a k-set is a subset of size k that can be
separated from its complement by a hyperplane. The k-set problem is still wide
open even in the plane, with a substantial gap between the best known upper and
lower bounds. The gap gets larger as the dimension grows. In three dimensions,
the best known upper bound is O(nk^(3/2)).
In its dual version, the problem can be generalized by replacing hyperplanes
by other families of surfaces (or curves in the plane). Reasonably sharp
bounds have been obtained for curves in the plane, but the known upper bounds
are rather weak for more general surfaces, already in three dimensions, except
for the case of triangles. The best known general bound, due to Chan, is
O(n^2.997), for families of surfaces that satisfy certain (fairly weak)
properties.
In this paper we consider the case of pseudoplanes in 3 dimensions (defined
in detail in the introduction), and establish the upper bound O(nk^(5/3)) for
the number of k-level vertices in an arrangement of n pseudoplanes. The bound
is obtained by establishing suitable (and nontrivial) extensions of dual
versions of classical tools that have been used in studying the primal k-set
problem, such as the Lovász Lemma and the Crossing Lemma.
Comment: 23 pages, 13 figures
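As a toy illustration of the primal k-set definition (not from the paper), the planar case can be checked by brute force: in general position, every separating line can be moved until it passes through two of the points, so it suffices to enumerate lines through point pairs. The function name and representation below are illustrative only.

```python
from itertools import permutations

def count_k_sets(points, k):
    """Count distinct k-sets of a planar point set (general position assumed).

    A k-set is a size-k subset separable from its complement by a line.
    Every separating line can be rotated/translated until it touches two
    points, so enumerating directed lines through point pairs suffices.
    """
    found = set()
    for p, q in permutations(points, 2):
        # points strictly to the left of the directed line p -> q
        left = [r for r in points
                if (q[0] - p[0]) * (r[1] - p[1])
                 - (q[1] - p[1]) * (r[0] - p[0]) > 0]
        # the line can be perturbed to also capture p, q, or both
        for extra in ([], [p], [q], [p, q]):
            cand = left + extra
            if len(cand) == k:
                found.add(frozenset(cand))
    return len(found)
```

For the four corners of a unit square, each corner is a 1-set and exactly the four edge pairs (not the diagonals) are 2-sets, so both counts are 4.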
Contemporary Surgical Options for the Aortic Root
Aortic root pathology is diverse, and it is the most common cause of aortic valve incompetence in the United States. Aortic root surgery is undergoing continuous development and refinement. Since Bentall's original description of aortic root replacement, many advances have been made in the field. The surgical armamentarium available today provides advanced repair options as well as replacement options for the aortic root. The aim of this chapter is to provide an insight into the basics of aortic root surgery and to describe the current, up-to-date solutions for aortic valve and aortic root pathologies.
Discriminative Topological Features Reveal Biological Network Mechanisms
Recent genomic and bioinformatic advances have motivated the development of
numerous random network models purporting to describe graphs of biological,
technological, and sociological origin. The success of a model has been
evaluated by how well it reproduces a few key features of the real-world data,
such as degree distributions, mean geodesic lengths, and clustering
coefficients. Often pairs of models can reproduce these features with
indistinguishable fidelity despite being generated by vastly different
mechanisms. In such cases, these few target features are insufficient to
distinguish which of the different models best describes real world networks of
interest; moreover, it is not clear a priori that any of the presently-existing
algorithms for network generation offers a predictive description of the
networks inspiring them. To derive discriminative classifiers, we construct a
mapping from the set of all graphs to a high-dimensional (in principle
infinite-dimensional) "word space." This map defines an input space for
classification schemes which allow us for the first time to state unambiguously
which models are most descriptive of the networks they purport to describe. Our
training sets include networks generated from 17 models either drawn from the
literature or introduced in this work, source code for which is freely
available. We anticipate that this new approach to network analysis will be of
broad interest to a number of communities.
Comment: supplemental website: http://www.columbia.edu/itc/applied/wiggins/netclass
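As a minimal illustration of feature-based network classification (far simpler than the paper's high-dimensional word-space map), one can already compute standard discriminative features such as mean degree and mean local clustering coefficient in plain Python; the function below is an illustrative sketch, not the paper's method.

```python
def graph_features(adj):
    """Toy topological feature vector for an undirected graph.

    adj: dict mapping each node to the set of its neighbours.
    Returns (mean degree, mean local clustering coefficient) -- two of the
    classical target features mentioned above.
    """
    n = len(adj)
    mean_deg = sum(len(adj[v]) for v in adj) / n
    cc = []
    for v in adj:
        nbrs = list(adj[v])
        d = len(nbrs)
        if d < 2:
            cc.append(0.0)
            continue
        # count edges among v's neighbours
        links = sum(1 for i in range(d) for j in range(i + 1, d)
                    if nbrs[j] in adj[nbrs[i]])
        cc.append(2 * links / (d * (d - 1)))
    return mean_deg, sum(cc) / n
```

On a triangle, every node has degree 2 and its two neighbours are connected, so the features are (2.0, 1.0). As the abstract notes, very different generative models can match such low-dimensional summaries, which is what motivates a richer feature map.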
Variance-Covariance Regularization Improves Representation Learning
Transfer learning has emerged as a key approach in the machine learning
domain, enabling the application of knowledge derived from one domain to
improve performance on subsequent tasks. Given the often limited information
about these subsequent tasks, a strong transfer learning approach calls for the
model to capture a diverse range of features during the initial pretraining
stage. However, recent research suggests that, without sufficient
regularization, the network tends to concentrate on features that primarily
reduce the pretraining loss function. This tendency can result in inadequate
feature learning and impaired generalization capability for target tasks. To
address this issue, we propose Variance-Covariance Regularization (VCR), a
regularization technique aimed at fostering diversity in the learned network
features. Drawing inspiration from recent advancements in the self-supervised
learning approach, our approach promotes learned representations that exhibit
high variance and minimal covariance, thus preventing the network from focusing
solely on loss-reducing features.
We empirically validate the efficacy of our method through comprehensive
experiments coupled with in-depth analytical studies on the learned
representations. In addition, we develop an efficient implementation strategy
that assures minimal computational overhead associated with our method. Our
results indicate that VCR is a powerful and efficient method for enhancing
transfer learning performance for both supervised learning and self-supervised
learning, opening new possibilities for future research in this domain.
Comment: 16 pages, 2 figures
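A minimal sketch of a variance-covariance penalty in the spirit described above (modeled on VICReg-style losses; the paper's exact formulation, hyperparameters, and implementation strategy may differ, and the function name is illustrative):

```python
import math

def vcr_loss(reps, std_target=1.0):
    """Variance-covariance penalty on a batch of representation vectors.

    reps: list of representation vectors (lists of floats).
    The variance term hinges on any dimension whose standard deviation
    falls below std_target (encouraging high variance); the covariance
    term penalizes squared off-diagonal covariances (encouraging
    decorrelated, hence diverse, features).
    """
    n, d = len(reps), len(reps[0])
    means = [sum(r[j] for r in reps) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in reps]
    # variance term: push each dimension's std up toward std_target
    var_term = sum(
        max(0.0, std_target
                 - math.sqrt(sum(c[j] ** 2 for c in centered) / (n - 1)))
        for j in range(d)
    ) / d
    # covariance term: mean squared off-diagonal covariance
    cov_term = sum(
        (sum(c[j] * c[k] for c in centered) / (n - 1)) ** 2
        for j in range(d) for k in range(d) if j != k
    ) / d
    return var_term, cov_term
```

A batch whose two dimensions are perfectly correlated gets zero variance penalty (each dimension already has unit std) but a large covariance penalty, which is exactly the failure mode the regularizer targets.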
Differential direct coding: a compression algorithm for nucleotide sequence data
While modern hardware can provide vast amounts of inexpensive storage for biological databases, the compression of nucleotide sequence data is still of paramount importance in order to facilitate fast search and retrieval operations through a reduction in disk traffic. This issue becomes even more important in light of the recent increase of very large data sets, such as metagenomes. In this article, I propose the Differential Direct Coding algorithm, a general-purpose nucleotide compression protocol that can differentiate between sequence data and auxiliary data by supporting the inclusion of supplementary symbols that are not members of the set of expected nucleotide bases, thereby offering reconciliation between sequence-specific and general-purpose compression strategies. This algorithm permits a sequence to contain a rich lexicon of auxiliary symbols that can represent wildcards, annotation data and special subsequences, such as functional domains or special repeats. In particular, the representation of special subsequences can be incorporated to provide structure-based coding that increases the overall degree of compression. Moreover, supporting a robust set of symbols removes the requirement of wildcard elimination and restoration phases, resulting in O(n) execution time, making this algorithm suitable for very large data sets. Because this algorithm compresses data on the basis of triplets, it is highly amenable to interpretation as a polypeptide at decompression time. Also, an encoded sequence may be further compressed using other existing algorithms, like gzip, thereby maximizing the final degree of compression. Overall, the Differential Direct Coding algorithm can offer a beneficial impact on disk traffic for database queries and other disk-intensive operations.
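The triplet idea can be illustrated with a toy packer (a sketch of the general approach only, not the paper's actual codec): three of the four bases A, C, G, T index into 64 byte values, leaving the remaining byte values free as escapes for auxiliary symbols such as wildcards. All names and the escape scheme below are assumptions for illustration.

```python
BASES = "ACGT"

def encode(seq):
    """Toy triplet packer: 3 nucleotides -> 1 byte (values 0-63).

    Byte values >= 64 are reserved; here 64 marks an escape, followed by
    one raw auxiliary character (wildcard, annotation symbol, etc.).
    """
    out = bytearray()
    i = 0
    while i < len(seq):
        chunk = seq[i:i + 3]
        if len(chunk) == 3 and all(c in BASES for c in chunk):
            out.append(BASES.index(chunk[0]) * 16
                       + BASES.index(chunk[1]) * 4
                       + BASES.index(chunk[2]))
            i += 3
        else:
            out.append(64)          # escape marker
            out.append(ord(seq[i])) # raw auxiliary character
            i += 1
    return bytes(out)

def decode(data):
    """Invert encode(), recovering the original sequence exactly."""
    seq = []
    i = 0
    while i < len(data):
        b = data[i]
        if b < 64:
            seq.append(BASES[b >> 4] + BASES[(b >> 2) & 3] + BASES[b & 3])
            i += 1
        else:
            seq.append(chr(data[i + 1]))
            i += 2
    return "".join(seq)
```

A pure-base sequence compresses 3:1, and auxiliary symbols like N survive the round trip without a separate wildcard-elimination pass, which is the property the abstract highlights.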
Instability of Myelin Tubes under Dehydration: deswelling of layered cylindrical structures
We report experimental observations of an undulational instability of myelin
figures. Motivated by this, we examine theoretically the deformation and
possible instability of concentric, cylindrical, multi-lamellar membrane
structures. Under conditions of osmotic stress (swelling or dehydration), we
find a stable, deformed state in which the layer deformation is given by \delta
R ~ r^{\sqrt{B_A/(hB)}}, where B_A is the area compression modulus, B is the
inter-layer compression modulus, and h is the repeat distance of layers. Also,
above a finite threshold of dehydration (or osmotic stress), we find that the
system becomes unstable to undulations, first with a characteristic wavelength
of order \sqrt{\xi d_0}, where \xi is the standard smectic penetration depth
and d_0 is the thickness of the dehydrated region.
Comment: 5 pages + 3 figures (RevTeX 4)
Fluid-membrane tethers: minimal surfaces and elastic boundary layers
Thin cylindrical tethers are common lipid bilayer membrane structures,
arising in situations ranging from micromanipulation experiments on artificial
vesicles to the dynamic structure of the Golgi apparatus. We study the shape
and formation of a tether in terms of the classical soap-film problem, which is
applied to the case of a membrane disk under tension subject to a point force.
A tether forms from the elastic boundary layer near the point of application of
the force, for sufficiently large displacement. Analytic results for various
aspects of the membrane shape are given.
Comment: 12 pages
Lightweight Lempel-Ziv Parsing
We introduce a new approach to LZ77 factorization that uses O(n/d) words of
working space and O(dn) time for any d >= 1 (for polylogarithmic alphabet
sizes). We also describe carefully engineered implementations of alternative
approaches to lightweight LZ77 factorization. Extensive experiments show that
the new algorithm is superior in most cases, particularly at the lowest memory
levels and for highly repetitive data. As a part of the algorithm, we describe
new methods for computing matching statistics which may be of independent
interest.
Comment: 12 pages
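For reference, a naive LZ77 factorization (quadratic time, none of the paper's space engineering) shows what the factors look like; each factor is either a back-reference (position, length) to an earlier occurrence or a fresh literal character.

```python
def lz77_factorize(s):
    """Naive LZ77 factorization, O(n^2) time, for illustration only.

    Returns a list of factors: (pos, length) for the longest previous
    match (self-overlapping matches allowed), or (char, 0) for a literal.
    """
    factors = []
    i, n = 0, len(s)
    while i < n:
        best_len, best_pos = 0, 0
        for j in range(i):
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_pos = l, j
        if best_len == 0:
            factors.append((s[i], 0))  # literal
            i += 1
        else:
            factors.append((best_pos, best_len))
            i += best_len
    return factors
```

On highly repetitive input the factorization is tiny ("abababab" becomes two literals plus one self-overlapping reference), which is why, as the abstract notes, repetitive data is where lightweight parsing matters most.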
Reconstructing dynamic regulatory maps
Even simple organisms have the ability to respond to internal and external stimuli. This response is carried out by a dynamic network of protein–DNA interactions that allows the specific regulation of genes needed for the response. We have developed a novel computational method that uses an input–output hidden Markov model to model these regulatory networks while taking into account their dynamic nature. Our method works by identifying bifurcation points, places in the time series where the expression of a subset of genes diverges from the rest of the genes. These points are annotated with the transcription factors regulating these transitions, resulting in a unified temporal map. Applying our method to study yeast response to stress, we derive dynamic models that are able to recover many of the known aspects of these responses. Predictions made by our method have been experimentally validated, leading to new roles for Ino4 and Gcn4 in controlling yeast response to stress. The temporal cascade of factors reveals common pathways and highlights differences between master and secondary factors in the utilization of network motifs and in condition-specific regulation.